Similar resources
Oracle Inequalities for Inverse Problems
We consider a sequence space model of statistical linear inverse problems where we need to estimate a function f from indirect noisy observations. Let a finite set of linear estimators be given. Our aim is to mimic the estimator in that set that has the smallest risk on the true f. Under general conditions, we show that this can be achieved by simple minimization of an unbiased risk estimator, provided the s...
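A minimal sketch, in generic notation, of the selection rule the abstract describes; the particular form of the constant and remainder term is an illustrative assumption, not taken from the (truncated) abstract. Given a finite family of linear estimators $\hat\theta^{(1)},\dots,\hat\theta^{(N)}$ of the coefficient sequence $\theta$ of $f$, and an unbiased risk estimator $U(y,j)$ with $\mathbf{E}\,U(y,j)=\mathbf{E}\,\|\hat\theta^{(j)}-\theta\|^2$, the data-driven choice is
$$
\hat j \;=\; \arg\min_{1\le j\le N} U(y,j),
$$
and an oracle inequality for it states that, for some small $\delta>0$ and remainder $r_\varepsilon$,
$$
\mathbf{E}\,\|\hat\theta^{(\hat j)}-\theta\|^2 \;\le\; (1+\delta)\,\min_{1\le j\le N}\mathbf{E}\,\|\hat\theta^{(j)}-\theta\|^2 \;+\; r_\varepsilon .
$$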
Global results on some nonlinear partial differential equations for direct and inverse problems
In this thesis we study the behavior of solutions of a class of partial differential equations on bounded domains. These equations are considered in semilinear and nonlinear form, for both direct and inverse problems. In particular, we examine the effect of various physical conditions on the problem, such as the presence of obstacles and sources, and of dispersion and viscosity in the wave and heat equations, and we look for conditions that imply global existence or global nonexistence...
Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems
A number of problems in nonparametric statistics and learning theory can be formulated as penalized empirical risk minimization over large function classes with penalties depending on the complexity of the functions (decision rules) involved in the problem. The goal of mathematical analysis of such procedures is to prove "oracle inequalities" describing optimality properties of penalized empiri...
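A generic form of such a statement, written here only as an illustration (the constants and the remainder term are placeholders, not those of the cited work): the penalized empirical risk minimizer is
$$
\hat f \;=\; \arg\min_{f\in\mathcal F}\Big(\frac{1}{n}\sum_{i=1}^{n}\ell(f;X_i,Y_i) \;+\; \mathrm{pen}(f)\Big),
$$
and an oracle inequality bounds its excess risk $\mathcal E(\hat f)$ by
$$
\mathcal E(\hat f) \;\le\; C\,\inf_{f\in\mathcal F}\big(\mathcal E(f)+\mathrm{pen}(f)\big) \;+\; \frac{c\,t}{n}
\quad\text{with probability at least } 1-e^{-t}.
$$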
Oracle inequalities for cross-validation type procedures
We prove oracle inequalities for three different types of adaptation procedures inspired by cross-validation and aggregation. These procedures are then applied to the construction of Lasso estimators and to aggregation with exponential weights, with data-driven regularization and temperature parameters, respectively. We also prove oracle inequalities for the cross-validation procedure itself...
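For reference, a standard form of aggregation with exponential weights, sketched in generic notation; the temperature $\beta$ and the empirical risks $\hat R_j$ are placeholders, and the abstract's data-driven choice of the temperature is not reproduced here:
$$
\hat f \;=\; \sum_{j=1}^{N} w_j\, f_j,
\qquad
w_j \;=\; \frac{\exp\!\big(-n\,\hat R_j/\beta\big)}{\sum_{k=1}^{N}\exp\!\big(-n\,\hat R_k/\beta\big)},
$$
where $\hat R_j$ is the empirical risk of the candidate $f_j$ and $\beta>0$ is the temperature parameter.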
Sharp Oracle Inequalities for Square Root Regularization
We study a set of regularization methods for high-dimensional linear regression models. These penalized estimators use the square root of the residual sum of squared errors as their loss function and an arbitrary weakly decomposable norm as their penalty function. This fit measure is chosen because the resulting estimator does not depend on the unknown standard deviation of the noise. On the other hand...
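A sketch of the kind of estimator described, with the $\ell_1$ norm shown only as one example of a weakly decomposable penalty (the notation is illustrative, not the paper's):
$$
\hat\beta \;=\; \arg\min_{\beta\in\mathbb{R}^p}\Big(\frac{\|y - X\beta\|_2}{\sqrt n} \;+\; \lambda\,\Omega(\beta)\Big),
\qquad \text{e.g. } \Omega(\beta)=\|\beta\|_1,
$$
where the square-root loss allows a theoretically valid choice of $\lambda$ that does not require knowledge of the noise level $\sigma$.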
Journal
Journal title: The Annals of Statistics
Year: 2002
ISSN: 0090-5364
DOI: 10.1214/aos/1028674843